
    Spectral norm of random tensors

    We show that the spectral norm of a random $n_1 \times n_2 \times \cdots \times n_K$ tensor (or higher-order array) scales as $O\left(\sqrt{(\sum_{k=1}^{K} n_k)\log(K)}\right)$ under some sub-Gaussian assumption on the entries. The proof is based on a covering number argument. Since the spectral norm is dual to the tensor nuclear norm (the tightest convex relaxation of the set of rank-one tensors), the bound implies that the convex relaxation yields sample complexity that is linear in the sum of the dimensions, which is much smaller than that of other recently proposed convex relaxations of tensor rank that use unfolding.
    Comment: 5 pages
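
    The claimed scaling can be probed numerically. Below is a minimal sketch (not from the paper) that lower-bounds the spectral norm of an i.i.d. Gaussian tensor with a higher-order power method and compares it to $\sqrt{(\sum_k n_k)\log(K)}$. The bound hides an absolute constant and the power method only finds a local maximum, so this is purely illustrative; the function names and tensor sizes are hypothetical.

        import numpy as np

        def contract_all_but(T, us, k):
            # Contract T with every vector us[j] except the k-th; the result
            # is a vector of length T.shape[k]. Contracting axes in
            # descending order keeps the remaining axis indices stable.
            v = T
            for j in reversed(range(T.ndim)):
                if j != k:
                    v = np.tensordot(v, us[j], axes=(j, 0))
            return v

        def spectral_norm_lower_bound(T, n_restarts=10, n_iter=100, seed=0):
            # Alternating maximization of <T, u_1 x ... x u_K> over unit
            # vectors (higher-order power method), best value over restarts.
            rng = np.random.default_rng(seed)
            best = 0.0
            for _ in range(n_restarts):
                us = [rng.standard_normal(n) for n in T.shape]
                us = [u / np.linalg.norm(u) for u in us]
                for _ in range(n_iter):
                    for k in range(T.ndim):
                        v = contract_all_but(T, us, k)
                        us[k] = v / np.linalg.norm(v)
                val = abs(float(us[-1] @ contract_all_but(T, us, T.ndim - 1)))
                best = max(best, val)
            return best

        shape = (20, 25, 30)                                 # hypothetical n_1, n_2, n_3
        T = np.random.default_rng(1).standard_normal(shape)  # i.i.d. Gaussian entries
        print("power-method estimate :", spectral_norm_lower_bound(T))
        print("sqrt((sum n_k) log K) :", np.sqrt(sum(shape) * np.log(len(shape))))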

    Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness

    We investigate the learning rate of multiple kernel learning (MKL) with $\ell_1$ and elastic-net regularizations. The elastic-net regularization is a composition of an $\ell_1$-regularizer for inducing sparsity and an $\ell_2$-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of nonzero components of the ground truth is relatively small, and show convergence rates sharper than any previously shown for both $\ell_1$ and elastic-net regularizations. Our analysis reveals some relations between the choice of regularization function and the performance. If the ground truth is smooth, we show a faster convergence rate for the elastic-net regularization under milder conditions than for $\ell_1$-regularization; otherwise, a faster convergence rate is shown for the $\ell_1$-regularization.
    Comment: Published at http://dx.doi.org/10.1214/13-AOS1095 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: text overlap with arXiv:1103.043
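
    To make the regularizer concrete, here is a minimal sketch (not the paper's algorithm or analysis) of elastic-net MKL with the squared loss. Each kernel matrix is converted to an explicit feature block through a Cholesky factor, which turns the problem into group lasso plus ridge and makes it solvable by proximal gradient; elastic_net_mkl, lam1, lam2, and the toy data are all hypothetical names and choices.

        import numpy as np

        def elastic_net_mkl(kernels, y, lam1=0.1, lam2=0.1, n_iter=500):
            # Each kernel K_m is factored as K_m = L_m L_m^T, so block
            # weights w_m satisfy ||f_m||_H = ||w_m||_2 and the problem is
            #   min_w (1/2n)||y - sum_m L_m w_m||^2
            #         + lam1 * sum_m ||w_m||_2 + (lam2/2) * sum_m ||w_m||_2^2,
            # i.e. group lasso plus ridge, solved by proximal gradient (ISTA).
            n = len(y)
            Ls = [np.linalg.cholesky(K + 1e-8 * np.eye(n)) for K in kernels]
            ws = [np.zeros(n) for _ in Ls]
            Phi = np.hstack(Ls)                       # stacked design matrix
            step = n / np.linalg.norm(Phi, 2) ** 2    # 1 / Lipschitz constant
            for _ in range(n_iter):
                resid = sum(L @ w for L, w in zip(Ls, ws)) - y
                for m, L in enumerate(Ls):
                    v = ws[m] - step * (L.T @ resid) / n
                    # prox of lam1*||.||_2 + (lam2/2)*||.||_2^2:
                    # block soft-threshold, then shrink by 1/(1 + step*lam2)
                    nv = np.linalg.norm(v)
                    scale = max(0.0, nv - step * lam1) / ((1 + step * lam2) * max(nv, 1e-12))
                    ws[m] = scale * v
            return ws

        # Toy data: five linear kernels, only the first one informative.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((50, 5))
        kernels = [np.outer(X[:, j], X[:, j]) for j in range(5)]
        y = X[:, 0] + 0.1 * rng.standard_normal(50)
        ws = elastic_net_mkl(kernels, y)
        print("block norms:", [round(float(np.linalg.norm(w)), 3) for w in ws])

    The block norms $\|w_m\|_2$ play the role of the kernel weights: the $\ell_1$ term can drive them exactly to zero, while the $\ell_2$ term smooths the selected ones.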

    Sparsity-accuracy trade-off in MKL

    We empirically investigate the best trade-off between sparse and uniformly weighted multiple kernel learning (MKL) using the elastic-net regularization on real and simulated datasets. We find that the best trade-off parameter depends not only on the sparsity of the true kernel-weight spectrum but also on the linear dependence among kernels and the number of samples.
    Comment: 8 pages, 2 figures
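
    Continuing the hypothetical sketch above, the trade-off this abstract studies can be seen by sweeping the two regularization weights: a dominant $\ell_1$ term drives many block norms exactly to zero (sparse MKL), while a dominant $\ell_2$ term keeps all kernels active with more evenly sized weights.

        # Reuses elastic_net_mkl, kernels, and y from the previous sketch;
        # the (lam1, lam2) grid is an arbitrary illustrative choice.
        for lam1, lam2 in [(0.3, 0.0), (0.15, 0.15), (0.0, 0.3)]:
            ws = elastic_net_mkl(kernels, y, lam1=lam1, lam2=lam2)
            active = sum(float(np.linalg.norm(w)) > 1e-6 for w in ws)
            print(f"lam1={lam1:.2f} lam2={lam2:.2f} -> {active} active kernels")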